Journal: Proceedings of the National Academy of Sciences of the United States of America
Article Title: Callousness, exploitativeness, and tracking of cooperation incentives in the human default network.
doi: 10.1073/pnas.2307221121
Figure Legend Snippet: Fig. 1. Rationale, design, and analytic approach. Individuals learn from experience by selecting an action, observing its outcome, and updating the expected reward value of future actions. Value updates are made using PEs, which reflect the discrepancy between expected and obtained outcomes such that better-than-expected outcomes lead to positive PEs and worse-than-expected outcomes lead to negative PEs. Other salient social information can also be integrated with experience to influence social decision-making, as when the reputation of one’s partner predicts decisions to trust them even when reputation is unrelated to the partner’s actual behavior (27, 41). (A) Consider the decision about whether to buy a friend a holiday gift. Reputational information (e.g., news of a friend’s immoral behavior; blue circle) can be integrated with reinforcement history (e.g., did the friend buy you a gift last year? green circle) to affect one’s policy toward their social counterpart. Critically, decisions that correctly anticipate the behavior of one’s social partner (correctly predicting they bought you a gift, or correctly predicting they did not buy you a gift) yield positive PEs according to our policy model, leading to a cycle of reciprocity even when no actual reward is received. (B) Participants played a modified iterative social trust game with three fictional trustees, in which they had the option to keep an initial endowment or invest it in the hopes of increasing their profit if the trustee also invested. Pretask vignettes were used to manipulate trustees’ reputations (blue box). To manipulate reinforcement history, trustees returned at varying rates across rich, poor, and neutral blocks (green box). On trials where participants chose to keep, counterfactual feedback about what the trustee would have chosen was provided, even though it did not affect the trial payout.
(C) The policy model posits that individuals learn from social feedback to optimize their approach, or policy, toward their social counterpart, leading to better anticipation of the counterpart’s behavior. One implication of this is that correct predictions of the trustee’s behavior will lead to positive reward PEs, even if no actual reward is provided. This can be seen by comparing the expected direction of PEs in a model in which participants track actual rewards (column 3) vs. a model in which they track the success of their policy toward the counterpart (column 4). We propose that policy PEs are primarily encoded within the brain’s default network. Image credit: Default network figure adapted from ref. 35. (D) MSEM was used to evaluate whether between-person variables (e.g., policy PEs encoded in the default network) moderate the effect of design variables on trial-level decision-making. Personality traits were introduced as between-person predictors of policy PEs in the default network. Formal tests of mediation were then used to examine the indirect effect of traits on behavior via learning signals (highlighted lines).
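The contrast drawn in panel C, between a model that tracks actual rewards and one that tracks the success of one's policy toward the counterpart, can be sketched in a few lines. This is a hypothetical illustration, not the authors' code; the function names, the 0/1 coding of a correct prediction, and the example values are assumptions made for clarity.

```python
# Hypothetical sketch contrasting the two PE definitions in Fig. 1C.
# In both models, PE = outcome - expectation; the models differ in what
# counts as the "outcome" on a given trial.

def reward_pe(expected_return, payout):
    """Reward model: PE is the obtained payout minus the expected payout."""
    return payout - expected_return

def policy_pe(expected_correct, predicted_share, trustee_shared):
    """Policy model: the outcome is whether the participant's decision
    correctly anticipated the trustee's behavior (coded 1 if correct,
    0 if not), so a correct prediction yields a positive PE even when
    no money is received."""
    correct = 1.0 if predicted_share == trustee_shared else 0.0
    return correct - expected_correct

def update_value(value, pe, alpha=0.3):
    """Standard delta-rule update with learning rate alpha (assumed value)."""
    return value + alpha * pe

# Example trial: the participant keeps the endowment, correctly predicting
# the trustee would not have shared (revealed by counterfactual feedback).
print(reward_pe(expected_return=0.5, payout=0.0))   # -0.5: worse than expected
print(policy_pe(expected_correct=0.5,
                predicted_share=False,
                trustee_shared=False))               # +0.5: prediction was right
```

The same trial thus produces PEs of opposite sign under the two models, which is what lets the fMRI analysis adjudicate between them.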
Article Snippet: Reinforcement learning models were fit using the Variational Bayesian Analysis (VBA) toolbox in MATLAB, which leverages parameter estimates from the full sample to constrain individual parameter estimates, reducing the risk of misestimation in poorly performing participants (64).
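The idea of using full-sample estimates to constrain individual fits is a form of empirical-Bayes shrinkage. The sketch below is a minimal, language-agnostic illustration of that principle in Python, not the VBA toolbox's actual algorithm; the function name, Gaussian assumptions, and example numbers are all assumptions for illustration.

```python
# Minimal empirical-Bayes shrinkage sketch (NOT the VBA toolbox itself):
# an individual's noisy parameter estimate is pulled toward the group mean
# in proportion to its uncertainty, reducing misestimation for poorly
# performing participants.

def shrink(individual_estimate, group_mean, individual_var, group_var):
    """Precision-weighted average of the individual estimate and the group
    prior (the posterior mean under Gaussian likelihood and prior)."""
    w = (1.0 / individual_var) / (1.0 / individual_var + 1.0 / group_var)
    return w * individual_estimate + (1.0 - w) * group_mean

# A noisy learner's raw learning-rate estimate (0.9) is shrunk toward the
# group mean (0.3) because its variance (1.0) exceeds the group's (0.25).
print(shrink(0.9, 0.3, individual_var=1.0, group_var=0.25))
```

A participant whose data are fit precisely (small `individual_var`) keeps an estimate close to their raw value, while a noisy participant's estimate is pulled most of the way to the group mean.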
Techniques: Modification